Current state-of-the-art summarization models are trained with either maximum likelihood estimation (MLE) or reinforcement learning (RL). In this study, we investigate a third training paradigm and argue that inverse reinforcement learning (IRL) may be more suitable for text summarization. IRL focuses on estimating the reward function of an agent, given a set of observations of that agent's behavior. Generally, IRL provides advantages in situations where the reward function is not explicitly known or where it is difficult to define the environment or interact with it directly. These situations are exactly what we observe in summarization. Thus, we introduce inverse reinforcement learning into text summarization and define a suite of sub-rewards that are important for summarization optimization. By simultaneously estimating the reward function and optimizing the summarization agent with expert demonstrations, we show that the model trained with IRL produces summaries that more closely follow human behavior, in terms of better ROUGE, coverage, novelty, compression ratio, and factuality, when compared to the baselines trained with MLE and RL.
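A common IRL formulation consistent with the setup above is feature-expectation matching: the reward is a weighted sum of sub-rewards, and the weights are nudged so that expert summaries score higher than the agent's. The sketch below is a minimal illustration under that assumption; the sub-reward names and all numeric values are hypothetical, not the paper's actual implementation.

```python
# Minimal sketch: linear reward over sub-rewards, updated by
# feature-expectation matching (a standard IRL technique).
# Sub-reward names and scores are illustrative only.

def feature_vector(summary_stats):
    # summary_stats: dict of sub-reward scores in [0, 1]
    keys = ["rouge", "coverage", "novelty", "compression", "factuality"]
    return [summary_stats[k] for k in keys]

def irl_weight_update(weights, expert_feats, agent_feats, lr=0.1):
    """One gradient step pushing the learned reward to score expert
    summaries above the agent's current summaries."""
    return [w + lr * (e - a) for w, e, a in zip(weights, expert_feats, agent_feats)]

expert = feature_vector({"rouge": 0.45, "coverage": 0.8, "novelty": 0.3,
                         "compression": 0.25, "factuality": 0.9})
agent = feature_vector({"rouge": 0.40, "coverage": 0.6, "novelty": 0.5,
                        "compression": 0.35, "factuality": 0.7})
w = irl_weight_update([0.0] * 5, expert, agent, lr=0.1)
```

After the update, weights grow for sub-rewards where experts outscore the agent (e.g. factuality) and shrink where the agent overshoots (e.g. novelty), so maximizing the learned reward pulls the agent toward expert behavior.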
How can we solve the data scarcity problem for end-to-end speech-to-text translation (ST)? It is well known that data augmentation is an effective way to improve performance on many tasks by enlarging the dataset. In this paper, we propose the Mix at three levels for Speech Translation (M^3ST) method to increase the diversity of the augmented training corpus. Specifically, we conduct two stages of fine-tuning based on a model pre-trained with external machine translation (MT) data. In the first stage of fine-tuning, we mix the training corpus at three levels, including the word level, sentence level, and frame level, and fine-tune the entire model with the mixed data. In the second stage of fine-tuning, we feed both the original speech sequences and the original text sequences into the model in parallel to fine-tune the network, and use Jensen-Shannon divergence to regularize their outputs. Experiments and analysis on the MuST-C speech translation benchmark show that M^3ST outperforms current strong baselines and achieves state-of-the-art results on eight directions with an average BLEU of 29.9.
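The Jensen-Shannon divergence used to regularize the two outputs is the symmetrized KL divergence to the mixture distribution. A minimal sketch, treating the model outputs as plain probability vectors (the distributions below are illustrative, not from the paper):

```python
import math

def kl(p, q):
    """KL divergence between discrete distributions p and q."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def js_divergence(p, q):
    """Jensen-Shannon divergence: symmetric, bounded by log(2)."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)
```

Because JS divergence is symmetric and zero only when the two distributions match, minimizing it pushes the speech-input and text-input branches to produce consistent translation distributions.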
Graph anomaly detection (GAD) is a vital task in graph-based machine learning and has been widely applied in many real-world applications. The primary goal of GAD is to capture anomalous nodes in graph datasets, i.e., those that evidently deviate from the majority of nodes. Recent methods have paid attention to contrastive strategies at various scales for GAD, i.e., node-subgraph and node-node contrasts. However, they neglect subgraph-subgraph comparison information, even though normal and abnormal subgraph pairs behave differently in terms of embeddings and structures in GAD, resulting in sub-optimal task performance. In this paper, we realize the above idea in a multi-view, multi-scale contrastive learning framework that incorporates subgraph-subgraph contrast for the first time. To be specific, we regard the original input graph as the first view and generate the second view by graph augmentation with edge modifications. With the guidance of maximizing the similarity of subgraph pairs, the proposed subgraph-subgraph contrast contributes to more robust subgraph embeddings despite structural variation. Moreover, the introduced subgraph-subgraph contrast cooperates well with the widely adopted node-subgraph and node-node contrastive counterparts for mutual GAD performance improvements. Besides, we conduct sufficient experiments to investigate the impact of different graph augmentation approaches on detection performance. The comprehensive experimental results demonstrate the superiority of our method over state-of-the-art approaches and the effectiveness of the multi-view subgraph-pair contrastive strategy for the GAD task.
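"Maximizing the similarity of subgraph pairs" across the two views is commonly realized as minimizing a cosine-similarity-based loss between the two embeddings of the same subgraph. A minimal sketch under that assumption (the loss form here is a generic contrastive objective, not necessarily the paper's exact one):

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def subgraph_contrast_loss(z1, z2):
    """z1, z2: embeddings of the same subgraph under the two views.
    Minimizing this loss maximizes their cosine similarity."""
    return 1.0 - cosine(z1, z2)
```

The loss is 0 when the two views embed the subgraph identically (up to scale) and grows as the augmented view's embedding drifts, which is what makes the learned subgraph embeddings robust to edge modifications.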
Question answering (QA) has shown impressive progress in answering questions from customized domains. Nevertheless, domain adaptation remains one of the most elusive challenges for QA systems, especially when QA systems are trained in a source domain but deployed in a different target domain. In this work, we investigate the potential benefits of question classification for QA domain adaptation. We propose a novel framework: Question Classification for Question Answering (QC4QA). Specifically, a question classifier is adopted to assign question classes to both the source and target data. Then, we perform joint training in a self-supervised fashion via pseudo-labeling. For optimization, the inter-domain discrepancy between the source and target domains is reduced via a maximum mean discrepancy (MMD) distance. We additionally minimize the intra-class discrepancy among QA samples of the same question class for fine-grained adaptation performance. To the best of our knowledge, this is the first work in QA domain adaptation to leverage question classification with self-supervised adaptation. We demonstrate the effectiveness of the proposed QC4QA with consistent improvements against state-of-the-art baselines on multiple datasets.
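The MMD distance used to reduce inter-domain discrepancy compares the mean kernel embeddings of source and target samples. A minimal sketch with an RBF kernel and a biased estimator (the feature vectors and bandwidth below are illustrative assumptions):

```python
import math

def rbf(x, y, sigma=1.0):
    """RBF (Gaussian) kernel between two feature vectors."""
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-sq_dist / (2 * sigma ** 2))

def mmd_squared(X, Y, sigma=1.0):
    """Biased estimate of squared MMD between sample sets X (source)
    and Y (target): mean within-X kernel + mean within-Y kernel
    - 2 * mean cross kernel."""
    kxx = sum(rbf(a, b, sigma) for a in X for b in X) / (len(X) ** 2)
    kyy = sum(rbf(a, b, sigma) for a in Y for b in Y) / (len(Y) ** 2)
    kxy = sum(rbf(a, b, sigma) for a in X for b in Y) / (len(X) * len(Y))
    return kxx + kyy - 2 * kxy
```

The estimate is zero when both sets are drawn identically and grows as the source and target feature distributions diverge, so using it as a training penalty pulls the two domains' representations together.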
We present a parallel massage robot based on series elastic actuators (SEAs), providing a unified force-position control approach. First, kinematic and static force models are established to obtain the corresponding control variables. Then, a novel force-position control strategy is proposed to separately control the force along the surface normal direction and the displacements in the other two directions, without requiring a dynamic model of the robot. To evaluate its performance, we implemented a series of robotic massage experiments. The results demonstrate that the proposed massage manipulator can successfully achieve the desired force and motion patterns of massage tasks, achieving a high-scoring user experience.
Despite recent progress in improving the performance of misinformation detection systems, classifying misinformation in unseen domains remains an elusive challenge. To address this issue, a common approach is to introduce a domain critic and encourage domain-invariant input features. However, early misinformation often demonstrates both conditional and label shifts against existing misinformation data (e.g., class imbalance in COVID-19 datasets), rendering such methods less effective for detecting early misinformation. In this paper, we propose a contrastive adaptation network for early misinformation detection (CANMD). Specifically, we leverage pseudo-labeling to generate high-confidence target examples for joint training with the source data. We also design a label-correction component to estimate and correct the label shift (i.e., class priors) between the source and target domains. Moreover, a contrastive adaptation loss is integrated into the objective function to reduce intra-class discrepancy and enlarge inter-class discrepancy. As such, the adapted model learns corrected class priors and an invariant conditional distribution across both domains, improving the estimation of the target data distribution. To demonstrate the effectiveness of the proposed CANMD, we study the case of early misinformation detection on COVID-19 and perform extensive experiments with multiple real-world datasets. The results suggest that CANMD can effectively adapt misinformation detection systems to the unseen COVID-19 target domain with significant improvements over state-of-the-art baselines.
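Generating "high-confidence target examples" via pseudo-labeling is typically done by keeping only target items whose predicted class probability clears a threshold, then using the argmax class as the label for joint training. A minimal sketch under that assumption (the threshold and data are illustrative):

```python
def select_pseudo_labeled(examples, probs, threshold=0.9):
    """Keep target examples whose maximum class probability clears the
    confidence threshold; label each with its argmax class index."""
    selected = []
    for x, p in zip(examples, probs):
        conf = max(p)
        if conf >= threshold:
            selected.append((x, p.index(conf)))
    return selected

# Illustrative target-domain predictions: [P(real), P(misinformation)]
examples = ["post_a", "post_b", "post_c"]
probs = [[0.95, 0.05], [0.60, 0.40], [0.10, 0.90]]
pairs = select_pseudo_labeled(examples, probs, threshold=0.9)
```

Filtering by confidence trades coverage for label quality: fewer target examples enter joint training, but the ones that do are less likely to inject label noise into the adaptation.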
Open information extraction (OIE) is an important NLP task that targets extracting structured information from unstructured text without limitations on the relation type or the text domain. This survey covers OIE technologies from 2007 to 2022, with a focus on new models not covered by previous surveys. We propose a new categorization method from the source-of-information perspective to accommodate the development of recent OIE techniques. In addition, we summarize three major approaches based on task settings, as well as currently popular datasets and model evaluation metrics. Given this comprehensive review, several future directions are outlined from the perspectives of datasets, information sources, output forms, methods, and evaluation metrics.
This paper presents the summary report of our DFGC 2022 competition. Deepfakes are developing rapidly, and realistic face swaps are becoming increasingly deceptive and difficult to detect. Conversely, methods for detecting deepfakes are also improving. There is a two-party game between deepfake creators and defenders. This competition provides a common platform for benchmarking the game between the current state of the art in deepfake creation and detection methods. The main research question this competition aims to answer is the current standing of the two adversaries when competing against each other. This is the second edition, after last year's DFGC 2021, with a new, more diverse video dataset, a more realistic game setting, and more reasonable evaluation metrics. Through this competition, we aim to stimulate research ideas for building better defenses against deepfake threats. We also release our participants' and our own DFGC 2022 datasets to enrich the deepfake data resources of the research community (https://github.com/nice-x/dfgc-2022).
Text editing models have recently become a prominent alternative to seq2seq models for monolingual text-generation tasks such as grammatical error correction, simplification, and style transfer. These tasks share a common trait: they exhibit a large amount of textual overlap between the source and target texts. Text editing models take advantage of this observation and learn to generate the output by predicting edit operations applied to the source sequence. In contrast, seq2seq models generate outputs word by word from scratch, making them slow at inference time. Text editing models provide several benefits over seq2seq models, including faster inference speed, higher sample efficiency, and better control and interpretability over the outputs. This tutorial provides a comprehensive overview of text editing models and current state-of-the-art approaches, and analyzes their pros and cons. We discuss challenges related to productionization and how these models can be used to mitigate hallucination and bias, both of which are pressing challenges in the field of text generation.
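"Predicting edit operations applied to the source sequence" usually means tagging each source token and then deterministically realizing the tags. The sketch below uses a simplified three-way tagging scheme (KEEP, DELETE, and APPEND_<word> for keep-then-insert); this scheme is an illustration of the general idea, not any specific model's tag set.

```python
def apply_edits(source_tokens, edit_tags):
    """Realize per-token edit tags into an output sequence.
    Tags: 'KEEP' (copy token), 'DELETE' (drop token),
    'APPEND_<w>' (copy token, then insert word w after it).
    A simplified illustrative scheme, not a specific model's."""
    out = []
    for tok, tag in zip(source_tokens, edit_tags):
        if tag == "DELETE":
            continue
        out.append(tok)
        if tag.startswith("APPEND_"):
            out.append(tag[len("APPEND_"):])
    return out

# Grammatical error correction example: "She go to school" -> "She goes to school"
corrected = apply_edits(["She", "go", "to", "school"],
                        ["APPEND_goes", "DELETE", "KEEP", "KEEP"])
```

Because most tags are KEEP on high-overlap tasks, the model only has to learn the few positions that change, which is the source of the sample-efficiency and inference-speed advantages described above.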
Reconstructing 3D geometry from unoriented point clouds can benefit many downstream tasks. Recent methods mostly adopt a neural shape representation of the signed distance field and fit the point cloud with unsigned supervision. However, we observe that using unsigned supervision may cause severe ambiguities and often leads to unexpected failures, such as generating undesired surfaces in free space when reconstructing complex structures, and struggling to reconstruct accurate surfaces. To reconstruct a better signed distance field, we propose Semi-Signed Neural Fitting (SSN-Fitting), which consists of semi-signed supervision and a loss-based region-sampling strategy. Our key insight is that signed supervision is more informative, and regions that are obviously outside the object can be easily determined. Meanwhile, a novel importance-sampling scheme is proposed to accelerate the optimization and better reconstruct details. Specifically, we voxelize and partition the object space into sign-known and sign-uncertain regions, to which different supervisions are applied. In addition, we adaptively adjust the sampling rate of each voxel according to the tracked reconstruction loss, so that the network can focus more on the complex, under-fitting regions. We conduct extensive experiments to demonstrate that SSN-Fitting achieves state-of-the-art performance under different settings on multiple datasets, including clean, density-varying, and noisy data.
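Adjusting each voxel's sampling rate "according to the tracked reconstruction loss" can be realized by allocating the per-iteration query budget proportionally to each voxel's running loss. A minimal sketch of that allocation step, under the assumption of simple proportional weighting (the exact weighting in the paper may differ):

```python
def voxel_sample_counts(voxel_losses, total_samples):
    """Allocate query samples per voxel in proportion to its tracked
    reconstruction loss, so under-fitting regions are sampled more.
    Falls back to a uniform split when all losses are zero."""
    total_loss = sum(voxel_losses)
    if total_loss == 0:
        return [total_samples // len(voxel_losses)] * len(voxel_losses)
    return [round(total_samples * l / total_loss) for l in voxel_losses]
```

A voxel with three times the loss of another receives roughly three times the query points, concentrating optimization effort on complex regions while well-fit voxels are sampled only lightly.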